Good models require good training data. For overparameterized deep models, the causal relationship between training data and model predictions is increasingly opaque and poorly understood. Influence analysis partially demystifies training's underlying interactions by quantifying the amount each training instance alters the final model. Measuring the training data's influence exactly can be provably hard in the worst case; this has led to the development and use of influence estimators, which only approximate the true influence. This paper provides the first comprehensive survey of training data influence analysis and estimation. We begin by formalizing the various, and in places orthogonal, definitions of training data influence. We then organize state-of-the-art influence analysis methods into a taxonomy; we describe each of these methods in detail and compare their underlying assumptions, asymptotic complexities, and overall strengths and weaknesses. Finally, we propose future research directions to make influence analysis more useful in practice as well as more theoretically and empirically sound. A curated, up-to-date list of resources related to influence analysis is available at https://github.com/ZaydH/influence_analysis_papers.
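To make the estimation target concrete, here is a minimal sketch of one canonical influence definition, leave-one-out retraining, which practical estimators approximate; the logistic-regression model and loss are illustrative choices, not the survey's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def loo_influence(X_train, y_train, x_test, y_test):
    """Exact leave-one-out influence via retraining; tractable only for
    small models, which is why practical estimators approximate it."""
    def test_loss(X, y):
        model = LogisticRegression().fit(X, y)
        proba = model.predict_proba(x_test.reshape(1, -1))
        return log_loss([y_test], proba, labels=model.classes_)

    base = test_loss(X_train, y_train)
    scores = np.empty(len(X_train))
    for i in range(len(X_train)):
        keep = np.arange(len(X_train)) != i
        # Positive score: removing instance i raises the test loss,
        # i.e., instance i helped this prediction.
        scores[i] = test_loss(X_train[keep], y_train[keep]) - base
    return scores
```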
Adversarial training instances can severely distort a model's behavior. This work investigates certified regression defenses, which provide guaranteed bounds on how much a regressor's prediction can change under a training-set attack. Our key insight is that certified regression reduces to certified classification when the median is used as the model's primary decision function. Combining our reduction with existing certified classifiers, we propose six new provably robust regressors. To our knowledge, this is the first work to certify the robustness of individual regression predictions without any assumptions about the data distribution or model architecture. We also show that existing state-of-the-art certified classifiers often make overly pessimistic assumptions that can degrade their provable guarantees. We introduce a tighter analysis of model robustness, which in many cases substantially improves the certified guarantees. Finally, we empirically demonstrate our approaches' effectiveness on both regression and classification data, where the accuracy of up to 50% of test predictions can be guaranteed under 1% training-set corruption, with certified predictions persisting under corruption of up to 4%. Our source code is available at https://github.com/zaydh/certified-regression.
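The reduction can be illustrated with a partition-plus-median construction; the details below (disjoint partitions, decision-tree submodels) are assumed for illustration and are not necessarily the paper's exact instantiation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_partitioned(X, y, n_parts=51, seed=0):
    """Train one submodel per disjoint partition of the training set."""
    idx = np.random.default_rng(seed).permutation(len(X))
    return [DecisionTreeRegressor().fit(X[p], y[p])
            for p in np.array_split(idx, n_parts)]

def certified_median(models, x, r):
    """Median prediction plus bounds that hold under any attack modifying
    at most r training instances (hence at most r submodels)."""
    preds = np.sort([m.predict(x.reshape(1, -1))[0] for m in models])
    mid = len(preds) // 2
    return preds[mid], preds[max(mid - r, 0)], preds[min(mid + r, len(preds) - 1)]
```

Because the partitions are disjoint, poisoning r training instances can alter at most r submodels, so the attacked median provably stays between the returned lower and upper order statistics.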
Targeted training-set attacks inject malicious instances into the training set to cause a trained model to mislabel one or more specific test instances. This work proposes the task of target identification, which determines whether a specific test instance is the target of a training-set attack. Target identification can be combined with adversarial-instance identification to find (and remove) the attack instances, mitigating the attack with minimal impact on other predictions. Rather than focusing on a single attack method or data modality, we build on influence estimation, which quantifies each training instance's contribution to a model's predictions. We show that the poor practical performance of existing influence estimators often stems from their over-reliance on particular training instances and iterations. Our renormalized influence estimators fix this weakness; they far outperform the original estimators at identifying influential groups of training examples in both adversarial and non-adversarial settings, even finding up to 100% of adversarial training instances with no clean-data false positives. Target identification then simplifies to detecting test instances with anomalous influence values. We demonstrate our method's effectiveness against backdoor and poisoning attacks across various data domains, including text, vision, and speech, as well as against a gray-box, adaptive attacker that specifically optimizes adversarial instances to evade our method. Our source code is available at https://github.com/zaydh/target_indistification.
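As a rough illustration of the renormalization idea, here is a hypothetical TracIn-style estimator that divides each checkpoint's gradient dot product by the training gradient's norm; the checkpoint list, learning rates, and loss_fn interface are assumptions, not the paper's exact estimator:

```python
import torch

def renormalized_tracin(checkpoints, lrs, loss_fn, z_train, z_test):
    """Sum over checkpoints of lr * <g_train, g_test> / ||g_train||."""
    score = torch.tensor(0.0)
    for model, lr in zip(checkpoints, lrs):
        params = [p for p in model.parameters() if p.requires_grad]
        g_tr = torch.cat([g.flatten() for g in
                          torch.autograd.grad(loss_fn(model, z_train), params)])
        g_te = torch.cat([g.flatten() for g in
                          torch.autograd.grad(loss_fn(model, z_test), params)])
        # Dividing by ||g_train|| removes the estimator's dependence on the
        # training instance's raw gradient magnitude.
        score = score + lr * (g_tr @ g_te) / (g_tr.norm() + 1e-12)
    return float(score)
```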
How can we identify the training examples that contribute most to a tree ensemble's predictions? In this paper, we introduce TREX, an explanation system that provides instance-attribution explanations for tree ensembles, such as random forests and gradient-boosted trees. TREX builds on the representer point framework previously developed for explaining deep neural networks. Since tree ensembles are non-differentiable, we define a kernel that captures the structure of the specific tree ensemble. By using this kernel in kernel logistic regression or a support vector machine, TREX builds a surrogate model that approximates the original tree ensemble. The weights in the surrogate model's kernel expansion are used to define the global or local importance of each training example. Our experiments show that TREX's surrogate model accurately approximates the tree ensemble; its global importance weights are more effective at dataset debugging than the previous state of the art; its explanations identify more influential samples than alternative methods under a remove-and-retrain evaluation framework; it runs orders of magnitude faster than alternative methods; and its local explanations can identify and explain errors due to domain mismatch.
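A minimal sketch of the surrogate idea follows; the leaf co-occurrence kernel and SVM surrogate below are common choices assumed for illustration, not TREX's exact kernel or fitting procedure:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC

X_train, y_train = make_classification(n_samples=200, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Feature map: one-hot indicators of the leaf each tree routes an example to;
# the kernel counts how many leaves two examples share across the ensemble.
leaf_enc = OneHotEncoder(handle_unknown="ignore").fit(forest.apply(X_train))
phi = leaf_enc.transform(forest.apply(X_train))
K = (phi @ phi.T).toarray()

# Surrogate model on the tree kernel; dual weights score training examples.
surrogate = SVC(kernel="precomputed").fit(K, y_train)
importance = np.zeros(len(X_train))
importance[surrogate.support_] = np.abs(surrogate.dual_coef_).ravel()
print("most influential training indices:", np.argsort(-importance)[:5])
```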
We propose an efficient method to generate white-box adversarial examples that trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method, HotFlip, relies on an atomic flip operation, which swaps one token for another, based on the gradients of the one-hot input vectors. Due to the efficiency of our method, we can perform adversarial training, which makes the model more robust to attacks at test time. With the use of a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well.
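A sketch of the gradient-based flip scoring might look as follows; the model interface (consuming a one-hot matrix and returning class logits) is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

def best_flip(model, token_ids, label, vocab_size):
    """Score every single-token substitution with one gradient pass and
    return the (position, new_token) pair that most increases the loss,
    to first order. Assumes `model` maps a (seq_len, vocab) one-hot matrix
    to a (num_classes,) logit vector."""
    onehot = F.one_hot(token_ids, vocab_size).float().requires_grad_(True)
    loss = F.cross_entropy(model(onehot).unsqueeze(0), label.view(1))
    grad = torch.autograd.grad(loss, onehot)[0].detach()   # (seq_len, vocab)
    # Estimated loss change for swapping token a -> b at position i:
    # grad[i, b] - grad[i, a].
    gain = grad - (grad * onehot.detach()).sum(dim=1, keepdim=True)
    gain.scatter_(1, token_ids.unsqueeze(1), float("-inf"))  # exclude no-ops
    pos = int(gain.max(dim=1).values.argmax())
    return pos, int(gain[pos].argmax())
```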
The performance of inertial navigation systems is largely dependent on a stable flow of external measurements and information to guarantee continuous filter updates and bound the inertial solution's drift. Platforms in different operational environments may at some point be prevented from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information to update the navigation filter. This paper aims to provide an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described by the notable works that implemented its concept, its use cases, the relevant state updates, and their corresponding measurement models. By matching the appropriate constraint to a given scenario, one can improve the navigation solution's accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
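As one concrete example of turning a known condition into a filter update, here is a sketch of a zero-velocity update (ZUPT), a classic model-aiding constraint; the state layout and noise covariance are assumptions for illustration:

```python
import numpy as np

def zupt_update(x, P, vel_idx, R=None):
    """Kalman update with the pseudo-measurement that the velocity states
    are zero, applied when the platform is detected to be stationary."""
    m = len(vel_idx)
    H = np.zeros((m, len(x)))
    H[np.arange(m), vel_idx] = 1.0        # H selects the velocity states
    R = R if R is not None else 1e-4 * np.eye(m)  # pseudo-measurement noise
    innovation = -H @ x                   # measurement z = 0
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```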
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
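A tabular sketch of the frozen-state idea is below; the uncontrolled slow dynamics, the aggregation distribution mu, and the effective discount gamma**T are simplifying assumptions, not the paper's exact algorithm:

```python
import numpy as np

def frozen_state_vi(r, P_fast, P_slow, mu, T=20, gamma=0.95, iters=500):
    """r[y]: (n_fast, n_act) rewards with the slow state frozen at y.
    P_fast[y]: (n_fast, n_act, n_fast) fast transitions with y frozen.
    P_slow: (n_slow, n_slow) slow transitions (assumed uncontrolled here).
    mu: (n_fast,) distribution used to aggregate lower-level values."""
    n_slow = len(r)
    # Lower level: finite-horizon VI per frozen slow state.
    lower_V = []
    for y in range(n_slow):
        V = np.zeros(r[y].shape[0])
        for _ in range(T):
            V = (r[y] + gamma * P_fast[y] @ V).max(axis=1)
        lower_V.append(V)
    # Upper level: fixed-point iteration on the slow timescale, with the
    # frozen-state values (aggregated under mu) as stage rewards.
    R = np.array([mu @ V for V in lower_V])
    U = np.zeros(n_slow)
    for _ in range(iters):
        U = R + gamma**T * P_slow @ U
    return lower_V, U
```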
In the present work we propose an unsupervised ensemble method consisting of oblique trees that can address the task of auto-encoding, namely Oblique Forest AutoEncoders (OF-AE for short). Our method is a natural extension of the eForest encoder introduced in [1]. More precisely, by employing oblique splits consisting of multivariate linear combinations of features instead of axis-parallel ones, we devise an auto-encoder method that decodes by computing a sparse solution to a set of linear inequalities derived from feature-value constraints. The code for reproducing our results is available at https://github.com/CDAlecsa/Oblique-Forest-AutoEncoders.
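The decoding step can be sketched as a feasibility problem over the path constraints; the interface below is hypothetical, and the zero objective returns any feasible point (a sparse solution as in the paper would instead minimize, e.g., an L1 objective via auxiliary variables):

```python
import numpy as np
from scipy.optimize import linprog

def decode(constraints, n_features):
    """constraints: list of (w, b, side) with side=+1 for w @ x <= b and
    side=-1 for w @ x >= b, collected from each oblique split on every
    tree's root-to-leaf path for the encoded example."""
    A, ub = [], []
    for w, b, side in constraints:
        A.append(side * np.asarray(w))   # flip >= constraints into <= form
        ub.append(side * b)
    res = linprog(c=np.zeros(n_features), A_ub=np.array(A), b_ub=np.array(ub),
                  bounds=[(None, None)] * n_features)
    return res.x if res.success else None
```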
When robots learn reward functions using high-capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task ``features" -- and learn how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data and fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data-augmentation heuristics. By contrast, in order to learn the representations that people use, and thereby their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
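A minimal sketch of learning from such similarity queries is below, using a triplet-style loss; the input dimensionality and encoder architecture are arbitrary placeholders, not the paper's model:

```python
import torch
import torch.nn as nn

# Placeholder encoder: 64-dim behavior features -> 16-dim representation.
encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 16))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def similarity_query_step(anchor, similar, dissimilar, margin=1.0):
    """One update from a user's answer: `anchor` is more like `similar`.
    Each argument is a (batch, 64) tensor of behavior features. Pulls
    user-similar behaviors together and pushes dissimilar ones apart."""
    za, zs, zd = encoder(anchor), encoder(similar), encoder(dissimilar)
    loss = torch.relu((za - zs).norm(dim=-1)
                      - (za - zd).norm(dim=-1) + margin).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```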
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on the quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely monitor the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and reconfigurable communications.